Narrative Visualizations Best Practices and Evaluation: A Systematic Literature Review


Abstract: In recent years there has been a growing interest in integrating data visualizations into narrative stories as an effective way of conveying information and knowledge. By leveraging the best practices established in the literature, narrative visualizations can reduce the cognitive workload associated with chart comprehension. However, it is also critical and challenging to assess the results, and several methodologies have been proposed to evaluate visualizations. In this article, we present a systematic literature review of ninety-five data storytelling and information visualization studies. Our goal is to collect and summarize current definitions of “data storytelling” reported in the literature, the best practices for designing narrative visualizations, and the evaluation criteria and methods used to assess them. We contribute by deriving a working definition of data storytelling, distinguishing among the concepts involved, and providing an overview of design guidelines to assist practitioners and researchers in creating narrative visualizations. In addition, we characterize the main evaluation criteria and methods. Our findings highlight the need for more out-of-the-box, ready-to-use evaluation tools that allow rapid and iterative assessment of narrative visualizations.

Keywords: information visualization, data storytelling, narrative visualizations, evaluation, systematic review.

1. Introduction

Data visualization has become essential to understand large datasets and communicate findings. As a subfield of visualization research [1], Information Visualization focuses on visual representations of abstract data [2], [3] that enhance understanding and support and amplify cognition [4].

Storytelling has long been used as an effective way of conveying information and knowledge. Stories aid memory and recall by embedding information into characters, settings, relationships, and events [5], and a narrative is what gives shape to a story [6]; hence the importance of integrating data visualization into narrative stories. In structured contexts, researchers can use these stories to support discussion, decision making, and process analysis [7].

Narrative visualizations that leverage best practices established in the literature can reduce the cognitive workload associated with chart comprehension [8], [9] and promote positive decision making [10]. However, the development and dissemination of guidelines for their design have been scarce. As demand for narrative visualizations increases, so does the need for standards to support their creation. By understanding the impact of specific visual encodings on performance, we can assist end users in making informed, effective decisions.

In recent years, evaluation has emerged as a central and challenging issue in the field of visualization [11], [12]. There is a diverse set of qualitative and quantitative methods for evaluating different aspects of data-driven stories [11], including controlled experiments, usability tests, and case studies [13]. But these methods focus only on a visualization’s ability to communicate facts. Recent studies by Dimara et al. suggest that people can make irrational decisions even when they properly understand the data [14], and that good performance on analytic tasks does not guarantee good performance in decision making [15]. Researchers aim to move beyond this evaluation approach to assess the utility of a visualization. As Matzen et al. [16] indicate, it would be valuable to have evaluation tools that can be deployed rapidly and iteratively during the design process to assess visualizations prior to conducting a user study.

Motivated by this, we present a systematic literature review (SLR) of ninety-five data storytelling and information visualization studies. Our goal is to collect and summarize current definitions of “data storytelling” reported in the literature, best practices for designing narrative visualizations, and the evaluation criteria and methods used to assess them.

The rest of this paper is organized as follows: Section 2 summarizes background and related work, Section 3 describes the methodology for conducting this SLR, Section 4 reports the results, Section 5 presents the discussion for each research question as well as the threats to validity, and Section 6 concludes and outlines future work.